What is incomplete outcome data?
Incomplete outcome data refers to outcome data that are missing because of attrition or because participants were excluded from the analysis, and it can lead to imbalanced comparisons between groups. Important considerations include keeping track of attrition and the reasons for it, handling incomplete data appropriately, and using intention-to-treat analysis; these steps can highlight whether there are systematic differences between groups that might bias the results.
While missing data is sometimes inescapable (e.g., losses to follow up) and some exclusions may be justifiable (e.g., participants who are randomized and then found to be ineligible), it is important to be transparent and provide details on exactly how participants progressed through the trial.
Tools
Appropriate handling of outcome data is indicated by any one of the following:
No missing outcome data
Reasons for missing outcome data unlikely to be related to true outcome (for survival data, censoring unlikely to be introducing bias)
Missing outcome data balanced in numbers across intervention groups, with similar reasons for missing data across groups
For dichotomous outcome data, the proportion of missing outcomes compared with observed event risk not enough to have a clinically relevant impact on the intervention effect estimate (see the sensitivity-analysis sketch after this list)
For continuous outcome data, plausible effect size (difference in means or standardized difference in means) among missing outcomes not enough to have a clinically relevant impact on observed effect size
Missing data have been imputed using appropriate methods
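One way to judge whether the amount of missing dichotomous outcome data could have a clinically relevant impact is a best-case/worst-case sensitivity analysis. The sketch below is a minimal illustration with made-up counts (it is not taken from any particular trial): it recomputes the risk ratio after assuming that all missing participants in one arm had the event and none in the other arm did, and then the reverse.

```python
# Minimal best-case/worst-case sensitivity analysis for missing dichotomous
# outcomes. All counts below are hypothetical illustrations.

def risk_ratio(events_a, n_a, events_b, n_b):
    """Risk ratio of arm A relative to arm B."""
    return (events_a / n_a) / (events_b / n_b)

obs_events = {"intervention": 30, "control": 45}    # observed events per arm
obs_n      = {"intervention": 150, "control": 148}  # participants with outcome data
missing    = {"intervention": 12, "control": 20}    # participants with missing outcomes

# Complete-case estimate: missing participants are simply ignored.
cc = risk_ratio(obs_events["intervention"], obs_n["intervention"],
                obs_events["control"], obs_n["control"])

# Worst case for the intervention: every missing intervention participant had
# the event and no missing control participant did.
worst = risk_ratio(obs_events["intervention"] + missing["intervention"],
                   obs_n["intervention"] + missing["intervention"],
                   obs_events["control"],
                   obs_n["control"] + missing["control"])

# Best case for the intervention: the reverse assumption.
best = risk_ratio(obs_events["intervention"],
                  obs_n["intervention"] + missing["intervention"],
                  obs_events["control"] + missing["control"],
                  obs_n["control"] + missing["control"])

print(f"complete case RR = {cc:.2f}")
print(f"worst case RR    = {worst:.2f}")
print(f"best case RR     = {best:.2f}")
```

If the best-case and worst-case estimates would lead to materially different clinical conclusions than the complete-case estimate, the missing data are numerous enough to matter and the risk of bias is correspondingly higher.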
CONSORT flow diagram: The CONSORT flow diagram outlines the flow of study participants through the stages of an RCT. This information can then be used to assess whether an intention-to-treat analysis has been conducted.
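As a minimal illustration (the arm names and counts below are hypothetical, not taken from any actual CONSORT diagram), the numbers randomized and analysed in each arm can be compared to flag apparent departures from intention-to-treat analysis:

```python
# Compare numbers randomized with numbers analysed per arm, as reported in a
# CONSORT flow diagram. All counts are hypothetical.

flow = {
    "intervention": {"randomized": 200, "analysed": 183},
    "control":      {"randomized": 200, "analysed": 171},
}

for arm, counts in flow.items():
    excluded = counts["randomized"] - counts["analysed"]
    pct = 100 * excluded / counts["randomized"]
    print(f"{arm}: {excluded}/{counts['randomized']} randomized participants "
          f"({pct:.1f}%) not included in the analysis")

# Under a strict intention-to-treat analysis, every randomized participant is
# analysed in the arm to which they were allocated, so any gap between the two
# counts should be explained in the flow diagram (e.g., losses to follow-up).
```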
Imputation: Imputation is the substitution of some value for missing data. There are many different methods of imputing data, but each has pros and cons, and no single technique is best for all situations. Some guidance on choosing a method can be found at www.missingdata.org.uk, but it is always best to consult a statistician. Techniques include (a brief illustrative sketch of a few of these follows the list):
Logic: missing value is deduced from edit rules
Mean: missing value is replaced by the mean of the respondents
Ratio: missing value is replaced by the adjusted value of another variable
Previous value (last observation carried forward): missing value is replaced by the value declared at the previous occasion
Unit trend: missing value is replaced by the value declared at the previous occasion, but adjusted according to the trend of the unit
Group trend: missing value is replaced by the value declared at the previous occasion, but adjusted according to a group trend
Regression: missing value is replaced by a value predicted from other variables using a regression fitted on the respondents
Imputation using a model: missing value is replaced by a value predicted using a model adjusted on the respondents
Hot-deck: missing value is replaced by a randomly chosen value from the respondents in the current file
Cold-deck: missing value is replaced by a randomly chosen value from the respondents in another file
Nearest neighbour: missing value is replaced by the nearest neighbour's value, according to a distance function based on one or more auxiliary variables
Imputation with residuals: missing value is replaced by a predicted value to which a randomly selected residual is added
Imputation with forced residuals: missing value is replaced by a predicted value to which a randomly selected residual is added but subject to constraints
Probability: in the case of (0,1) variables, the missing value is replaced by the probability of obtaining a value of 1
Nearest neighbour's trend: missing value is replaced by the value reported at a previous occasion modified according to the trend of the nearest neighbour
Nearest predicted value: missing value is replaced by the value which is nearest to the value predicted for the nonrespondent (hybrid method between model and donor imputation)
Logistic imputation followed by model imputation: logistic regression is used to determine the category and the missing value is replaced by a value predicted using a model adjusted on the respondents
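To make a few of these concrete, the sketch below applies mean imputation, last observation carried forward, and a simple regression imputation to a small, made-up dataset using pandas and NumPy. The column names and values are purely illustrative, and single imputation like this is shown only to clarify the definitions, not as a recommendation.

```python
import numpy as np
import pandas as pd

# Hypothetical follow-up measurements for eight participants; two are missing
# at week 8.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "week4": rng.normal(48, 10, 8).round(1),
    "week8": rng.normal(46, 10, 8).round(1),
})
df.loc[[2, 5], "week8"] = np.nan

# Mean: missing value replaced by the mean of the respondents.
mean_imputed = df["week8"].fillna(df["week8"].mean())

# Last observation carried forward: missing value replaced by the value
# declared at the previous occasion (here, week 4).
locf_imputed = df["week8"].fillna(df["week4"])

# Regression: missing value replaced by a value predicted from another
# variable, using a regression fitted on the respondents.
observed = df.dropna(subset=["week8"])
slope, intercept = np.polyfit(observed["week4"], observed["week8"], deg=1)
regression_imputed = df["week8"].fillna(intercept + slope * df["week4"])

print(pd.DataFrame({
    "observed": df["week8"],
    "mean": mean_imputed,
    "LOCF": locf_imputed,
    "regression": regression_imputed,
}))
```

Methods that propagate the uncertainty introduced by missing values, such as multiple imputation, are generally preferred in practice; again, it is best to consult a statistician.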
Examples
On this page we've compiled a number of examples of risk of bias assessments - the good, the bad, and those that are a bit unclear. Feel free to work through them yourself and come up with an assessment of low, unclear, or high risk of bias (our judgments and rationale are on the assessments page), or download a spreadsheet file with the same information. RoB assessments are divided into the seven major domains: sequence generation, allocation concealment, blinding of participants/personnel, blinding of outcome assessors, incomplete outcome data, selective outcome reporting, and other sources of bias. A quotation is given with the article title following in brackets.
If you have other examples, please add them to the list!
References
Abraha I, Duca PG, Montedori A. Empirical evidence of bias: modified intention to treat analysis of randomised trials affects estimates of intervention efficacy. Z Evid Fortbild Qual Gesundhwes 2008;102(Suppl VI),9. [Cochrane]
Bell ML, Kenward MG, Fairclough DL, Horton NJ. Differential dropout and bias in randomised controlled trials: when it matters and when it may not. BMJ 2013;346:e8668. [PubMed]